-
Internet-of-Things (IoT) approaches are continually introducing new sensors into the fields of agriculture and animal welfare. The application of multi-sensor data fusion to these domains remains a complex and open-ended challenge that defies straightforward optimization, often requiring iterative testing and refinement. To respond to this need, we have created a new open-source framework as well as a corresponding Python tool, which we call the “Data Fusion Explorer (DFE)”. We demonstrated and evaluated the effectiveness of our proposed framework using four early-stage datasets from diverse disciplines, including animal/environmental tracking, agrarian monitoring, and food quality assessment. These datasets spanned multiple common formats, including single, array, and image data, as well as classification or regression tasks and temporal or spatial distributions. We compared various pipeline schemes, such as low-level against mid-level fusion, or the placement of dimensionality reduction. Based on their space and time complexities, we then highlighted how these pipelines may be used for different purposes depending on the given problem. As an example, we observed that early feature extraction reduced time and space complexity in agrarian data. Additionally, independent component analysis slightly outperformed principal component analysis in a sweet potato imaging dataset. Lastly, we benchmarked the DFE tool against vanilla Python 3 packages using our four datasets’ pipelines and observed a significant reduction, usually more than 50%, in coding requirements for users in almost every dataset, suggesting the usefulness of this package for interdisciplinary researchers in the field.
Free, publicly-accessible full text available July 1, 2026.
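The DFE package's own API is not reproduced here; the following minimal sketch instead uses scikit-learn, with synthetic placeholder data standing in for the spectral and imaging streams, to illustrate the two pipeline schemes being compared: low-level fusion concatenates the raw streams before a single dimensionality reduction, while mid-level fusion reduces each stream separately (here contrasting PCA and ICA) before concatenating the resulting features.

    import numpy as np
    from sklearn.decomposition import PCA, FastICA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X_spectral = rng.normal(size=(200, 64))   # placeholder per-sample spectral features
    X_imaging = rng.normal(size=(200, 256))   # placeholder flattened image patches
    y = rng.integers(0, 2, size=200)          # placeholder quality labels

    # Low-level fusion: concatenate raw streams, then reduce once.
    X_low = np.hstack([X_spectral, X_imaging])
    low_level = make_pipeline(PCA(n_components=10), RandomForestClassifier(random_state=0))

    # Mid-level fusion: reduce each stream separately (PCA vs. ICA), then concatenate.
    X_mid = np.hstack([
        PCA(n_components=5).fit_transform(X_spectral),
        FastICA(n_components=5, random_state=0).fit_transform(X_imaging),
    ])
    mid_level = RandomForestClassifier(random_state=0)

    print("low-level fusion:", cross_val_score(low_level, X_low, y, cv=5).mean())
    print("mid-level fusion:", cross_val_score(mid_level, X_mid, y, cv=5).mean())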
-
Guide dogs play a crucial role in enhancing independence and mobility for people with visual impairment, offering invaluable assistance in navigating daily tasks and environments. However, the extensive training required for these dogs is costly, resulting in a limited availability that does not meet the high demand for such skilled working animals. To optimize the training process and to better understand the challenges these guide dogs may be experiencing in the field, we have created a multi-sensor smart collar system. In this study, we developed and compared two supervised machine learning methods to analyze the data acquired from these sensors. We found that the Convolutional Long Short-Term Memory (Conv-LSTM) network worked much more efficiently on subsampled data, while Kernel Principal Component Analysis (KPCA) worked better on interpolated data. Each attained approximately 40% accuracy on a 10-state system. Because it requires no training, KPCA is a much faster method, but it is less efficient with larger datasets. Among the various sensors on the collar system, we observed that the inertial measurement units account for the vast majority of predictability, and that the addition of environmental acoustic sensing data slightly improved performance in most datasets. We also created a lexicon of data patterns using an unsupervised autoencoder. We present several regions of relatively high density in the latent-variable space that correspond to more common patterns, along with our attempt to visualize these patterns. In this preliminary effort, we found that several test states could be combined into larger superstates to simplify the testing procedures. Additionally, environmental sensor data did not carry much weight, as air conditioning units maintained the testing room at standard conditions.
Free, publicly-accessible full text available December 1, 2025.
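As an illustration of the KPCA-based pipeline described above (not the study's actual code), the sketch below fits a kernel PCA projection followed by a simple classifier on placeholder windowed sensor features; the feature dimensions, number of windows, and classifier choice are all assumptions made for the example.

    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 60))     # placeholder: 500 windows x 60 IMU/acoustic summary features
    y = rng.integers(0, 10, size=500)  # placeholder labels for a 10-state system

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # KernelPCA needs no iterative training loop, so it is quick to fit, but its
    # kernel matrix grows quadratically with the number of windows, which is why
    # it becomes less practical on larger datasets.
    clf = make_pipeline(
        KernelPCA(n_components=15, kernel="rbf"),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))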
-
Canine-assisted interactions (CAIs) have been explored to offer therapeutic benefits to human participants in various contexts, from addressing cancer-related fatigue to treating post-traumatic stress disorder. Despite their widespread adoption, there are still unresolved questions regarding the outcomes for both humans and animals involved in these interactions. Previous attempts to address these questions have suffered from core methodological weaknesses, especially the absence of tools for efficient objective evaluation and a lack of focus on the canine perspective. In this article, we present a first-of-its-kind system and study to collect simultaneous and continuous physiological data from both CAI interactants. Motivated by our extensive field reviews and stakeholder feedback, this comprehensive wearable system is composed of custom-designed and commercially available sensor devices. We performed a repeated-measures pilot study to combine data collected via this system with a novel dyadic behavioral coding method and short- and long-term surveys. We evaluated these multimodal data streams independently, and we further correlated the psychological, physiological, and behavioral metrics to better elucidate the outcomes and dynamics of CAIs. Confirming previous field results, human electrodermal activity was the measure that most strongly distinguished the dyads’ non-interaction and interaction periods. Valence, arousal, and the positive affect of the human participant significantly increased during interaction with the canine participant. We also observed in our pilot study that (a) the canine heart rate was more dynamic than the human’s during interactions, (b) the surveys proved to be the best indicator of the subjects’ affective state, and (c) the behavior coding approaches best tracked the bond quality between the interacting dyads. Notably, we found that most of the interaction sessions were characterized by extended neutral periods with some positive and negative peaks, where the bonded pairs might display decreased behavioral synchrony. We also present three new representations of the internal and overall dynamics of CAIs for adoption by the broader field. Lastly, this paper discusses ongoing options for further dyadic analysis, interspecies emotion prediction, integration of contextually relevant environmental data, and standardization of human–animal interaction equipment and analytical approaches. Altogether, this work takes a significant step forward on a promising path to better understanding how CAIs improve well-being and how interspecies psychophysiological states can be appropriately measured.
Free, publicly-accessible full text available December 1, 2025.
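The following is an illustrative sketch, not the study's analysis code, of the kind of paired comparison that can test whether electrodermal activity (EDA) differs between non-interaction and interaction periods; the participant count and EDA values are simulated placeholders.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_participants = 12
    # Placeholder per-participant mean EDA (microsiemens) for each period.
    eda_baseline = rng.normal(loc=2.0, scale=0.5, size=n_participants)
    eda_interaction = eda_baseline + rng.normal(loc=0.4, scale=0.3, size=n_participants)

    # Paired test across participants: each participant serves as their own control.
    diff = eda_interaction - eda_baseline
    t_stat, p_value = stats.ttest_rel(eda_interaction, eda_baseline)
    cohens_dz = diff.mean() / diff.std(ddof=1)
    print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's dz = {cohens_dz:.2f}")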
-
Free, publicly-accessible full text available December 2, 2025
-
Existing machine-learning work has shown that algorithms can benefit from curricula: learning first on simple examples before moving to more difficult examples. While most existing work on curriculum learning focuses on developing automatic methods to iteratively select training examples with increasing difficulty tailored to the current ability of the learner, relatively little attention has been paid to the ways in which humans design curricula. We argue that a better understanding of human-designed curricula could give us insights into the development of new machine-learning algorithms and interfaces that can better accommodate machine- or human-created curricula. Our work addresses this emerging and vital area empirically, taking an important step to characterize the nature of human-designed curricula relative to the space of possible curricula and the performance benefits that may (or may not) occur.
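As a minimal sketch of the underlying idea (not the study's human-designed curricula), the example below orders synthetic training examples from easy to hard, using the margin of a quick probe model as an assumed difficulty score, and feeds them to an incremental learner in that order.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # Assumed difficulty proxy: examples far from a probe model's decision
    # boundary (large margin) are treated as "easy".
    probe = SGDClassifier(random_state=0).fit(X, y)
    difficulty = -np.abs(probe.decision_function(X))
    order = np.argsort(difficulty)           # easiest examples first

    learner = SGDClassifier(random_state=0)
    classes = np.unique(y)
    for batch in np.array_split(order, 10):  # present easy batches before hard ones
        learner.partial_fit(X[batch], y[batch], classes=classes)

    print("training accuracy after curriculum ordering:", learner.score(X, y))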
-
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has assumed that the feedback people provide for a decision depends only on the behavior they are teaching and is independent of the learner’s current policy. We present empirical results showing this assumption to be false: whether human trainers give positive or negative feedback for a decision is influenced by the learner’s current policy. Based on this insight, we introduce Convergent Actor-Critic by Humans (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
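The sketch below illustrates the policy-dependent update at the core of a COACH-style learner, using a softmax policy on a tabular toy problem and a simulated trainer standing in for real human feedback; the environment, feedback rule, and hyperparameters are assumptions made for the example, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 3
    theta = np.zeros((n_states, n_actions))   # softmax policy parameters
    alpha, lam = 0.5, 0.3                     # learning rate, eligibility decay (placeholders)
    eligibility = np.zeros_like(theta)
    target = rng.integers(0, n_actions, size=n_states)  # simulated trainer's desired actions

    def policy(s):
        logits = theta[s] - theta[s].max()
        p = np.exp(logits)
        return p / p.sum()

    for step in range(2000):
        s = rng.integers(n_states)
        probs = policy(s)
        a = rng.choice(n_actions, p=probs)
        # Simulated trainer: +1 if the chosen action matches the target, else -1.
        feedback = 1.0 if a == target[s] else -1.0
        # Gradient of log pi(a|s) for a softmax policy over a tabular state.
        grad = np.zeros_like(theta)
        grad[s] = -probs
        grad[s, a] += 1.0
        # Feedback plays the role of the advantage in the actor update.
        eligibility = lam * eligibility + grad
        theta += alpha * feedback * eligibility

    print("learned greedy actions: ", theta.argmax(axis=1))
    print("trainer's target actions:", target)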
-
As robots become pervasive in human environments, it is important to enable users to effectively convey new skills without programming. Most existing work on Interactive Reinforcement Learning focuses on interpreting and incorporating non-expert human feedback to speed up learning; we aim to design a better representation of the learning agent that is able to elicit more natural and effective communication between the human trainer and the learner, while treating human feedback as discrete communication that depends probabilistically on the trainer's target policy. This work entails a user study where participants train a virtual agent to accomplish tasks by giving reward and/or punishment in a variety of simulated environments. We present results from 60 participants to show how a learner can ground natural language commands and adapt its action execution speed to learn more efficiently from human trainers. The agent's action execution speed can be successfully modulated to encourage more explicit feedback from a human trainer in areas of the state space where there is high uncertainty. Our results show that our novel adaptive speed agent dominates different fixed speed agents on several measures of performance. Additionally, we investigate the impact of instructions on user performance and user preference in training conditions.
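A minimal sketch of the adaptive-speed idea, assuming that the policy's entropy in the current state is the uncertainty signal and that it maps linearly onto an execution delay; the constants and the mapping are illustrative, not the values used in the study.

    import numpy as np

    def action_delay(action_probs, min_delay=0.2, max_delay=2.0):
        """Map policy entropy in the current state to an execution delay in seconds."""
        p = np.asarray(action_probs, dtype=float)
        p = p / p.sum()
        entropy = -(p * np.log(p + 1e-12)).sum()
        max_entropy = np.log(len(p))         # uniform policy = maximal uncertainty
        uncertainty = entropy / max_entropy  # scaled to [0, 1]
        return min_delay + uncertainty * (max_delay - min_delay)

    print(action_delay([0.9, 0.05, 0.05]))   # confident state -> fast execution
    print(action_delay([0.4, 0.3, 0.3]))     # uncertain state -> slower, inviting feedback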